Route choice


A Bayesian latent class reinforcement learning framework to capture adaptive, feedback-driven travel behaviour

Sfeir, Georges, Hess, Stephane, Hancock, Thomas O., Rodrigues, Filipe, Rad, Jamal Amani, Bliemer, Michiel, Beck, Matthew, Khan, Fayyaz

arXiv.org Machine Learning

Many travel decisions involve a degree of experience formation, where individuals learn their preferences over time. At the same time, there is extensive scope for heterogeneity across individual travellers, both in their underlying preferences and in how these evolve. The present paper puts forward a Latent Class Reinforcement Learning (LCRL) model that allows analysts to capture both of these phenomena. We apply the model to a driving simulator dataset and estimate the parameters through Variational Bayes. We identify three distinct classes of individuals that differ markedly in how they adapt their preferences: the first displays context-dependent preferences with context-specific exploitative tendencies; the second follows a persistent exploitative strategy regardless of context; and the third engages in an exploratory strategy combined with context-specific preferences.
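The class structure described in this abstract can be illustrated with a small numerical sketch. Everything below is an illustrative assumption rather than the paper's specification: a two-route setting, an exponential-smoothing learning rule, and hand-picked parameters. Each latent class pairs its own learning rate with its own softmax temperature (low temperature being exploitative, high temperature exploratory), and population-level choice probabilities are class-share weighted averages.

```python
import math

def softmax(utils, temp):
    exps = [math.exp(u / temp) for u in utils]
    s = sum(exps)
    return [e / s for e in exps]

def simulate_class(rewards_per_day, alpha, temp):
    """Day-by-day choice probabilities for one latent class.

    rewards_per_day: list of (reward_route0, reward_route1) feedback signals.
    alpha: class-specific learning rate; temp: class-specific temperature.
    """
    q = [0.0, 0.0]                       # learned utilities, start neutral
    probs = []
    for r in rewards_per_day:
        probs.append(softmax(q, temp))
        # feedback-driven update (experience formation)
        q = [q[a] + alpha * (r[a] - q[a]) for a in (0, 1)]
    return probs

def mixture_prob(rewards_per_day, classes, shares):
    """Population choice probability: class-share weighted average."""
    per_class = [simulate_class(rewards_per_day, a, t) for a, t in classes]
    days = len(rewards_per_day)
    return [[sum(s * pc[d][a] for s, pc in zip(shares, per_class))
             for a in (0, 1)] for d in range(days)]
```

Estimating the class shares and parameters from data (via Variational Bayes, as in the paper) is beyond this sketch; here they are fixed by hand. A low temperature reproduces the exploitative behavior, a high temperature the exploratory one.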


Capturing Context-Aware Route Choice Semantics for Trajectory Representation Learning

Cao, Ji, Wang, Yu, Zheng, Tongya, Song, Jie, Guo, Qinghong, Ren, Zujie, Jin, Canghong, Chen, Gang, Song, Mingli

arXiv.org Artificial Intelligence

Abstract--Trajectory representation learning (TRL) aims to encode raw trajectory data into low-dimensional embeddings for downstream tasks such as travel time estimation, mobility prediction, and trajectory similarity analysis. From a behavioral perspective, a trajectory reflects a sequence of route choices within an urban environment. However, most existing TRL methods ignore this underlying decision-making process and instead treat trajectories as static, passive spatiotemporal sequences, thereby limiting the semantic richness of the learned representations. To bridge this gap, we propose CORE, a TRL framework that integrates context-aware route choice semantics into trajectory embeddings. CORE first incorporates a multi-granular Environment Perception Module, which leverages large language models (LLMs) to distill environmental semantics from point of interest (POI) distributions, thereby constructing a context-enriched road network. Building upon this backbone, CORE employs a Route Choice Encoder with a mixture-of-experts (MoE) architecture, which captures route choice patterns by jointly leveraging the context-enriched road network and navigational factors. Extensive experiments on 4 real-world datasets across 6 downstream tasks demonstrate that CORE consistently outperforms 12 state-of-the-art TRL methods, achieving an average improvement of 9.79% over the best-performing baseline. Our code is available at https://github.com/caoji2001/CORE.

Ji Cao, Yu Wang, Gang Chen, and Mingli Song are with the College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China; Ji Cao is also with the Zhejiang Lab, Hangzhou 311121, China (e-mail: {caoj25, yu.wang, cg, brooksong}@zju.edu.cn). Tongya Zheng and Canghong Jin are with the Zhejiang Provincial Engineering Research Center for Real-Time SmartTech in Urban Security Governance, Hangzhou City University, Hangzhou 310015, China (e-mail: doujiang_zheng@163.com). Jie Song is with the School of Software Technology, Zhejiang University, Ningbo 315100, China (e-mail: sjie@zju.edu.cn).
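The mixture-of-experts idea behind the Route Choice Encoder can be sketched independently of the released CORE code. The toy below is an illustrative assumption only (dimensions, random weights, pure-Python linear algebra): a gate produces a softmax weighting over several expert transforms of a feature vector, so different experts can specialize in different route choice patterns.

```python
import math
import random

random.seed(0)
DIM, EXPERTS = 4, 3
# Random toy parameters; a real encoder would learn these.
W_experts = [[[random.gauss(0, 0.5) for _ in range(DIM)] for _ in range(DIM)]
             for _ in range(EXPERTS)]
W_gate = [[random.gauss(0, 0.5) for _ in range(DIM)] for _ in range(EXPERTS)]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def moe_encode(x):
    """Gate-weighted combination of expert outputs for one feature vector."""
    gate = softmax(matvec(W_gate, x))          # expert weights, sum to 1
    outs = [matvec(W, x) for W in W_experts]   # each expert's transform
    return [sum(g * o[d] for g, o in zip(gate, outs)) for d in range(DIM)]
```

In CORE the expert inputs additionally carry the context-enriched road network and navigational factors; this sketch shows only the gating mechanics.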


Equilibria in routing games with connected autonomous vehicles will not be strong, as exclusive clubs may form

Kucharski, Rafał, Psarou, Anastasia, Descormier, Natello

arXiv.org Artificial Intelligence

User Equilibrium is the standard representation of the so-called routing game, in which drivers adjust their route choices to arrive at their destinations as fast as possible. Asking whether this equilibrium is strong or not was meaningless for human drivers, who did not form coalitions due to technical and behavioral constraints. This is no longer the case for connected autonomous vehicles (CAVs), which will be able to communicate and collaborate to jointly form routing coalitions. We demonstrate this for the first time on a carefully designed toy-network example, where a 'club' of three autonomous vehicles jointly decides to deviate from the user equilibrium and benefit (arrive faster). The formation of such a club has negative consequences for other users, who are not invited to join it and now travel longer, and for the system, making it suboptimal and disequilibrated, which triggers adaptation dynamics. This discovery has profound implications for the future of our cities. We demonstrate that, if not prevented, CAV operators may intentionally push traffic systems away from their classic Nash equilibria, benefiting their own users and imposing costs on others. These findings suggest the possible emergence of an exclusive CAV elite, from which human-driven vehicles and non-coalition members may be excluded, potentially leading to systematically longer travel times for those outside the coalition, which would be harmful for the equity of public road networks.
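Whether a coalition ('club') can profitably deviate from a Nash/user equilibrium can be checked mechanically. The sketch below is a hypothetical two-vehicle atomic Braess-style network, not the paper's three-vehicle example: it exhibits a profile that is a Nash equilibrium yet admits a joint deviation that makes every mover strictly better off, i.e. a Nash equilibrium that is not strong.

```python
from itertools import product

# Links with load-dependent costs; routes are tuples of links.
COST = {
    "OA": lambda x: x,    # congestible
    "AD": lambda x: 2,    # fixed
    "OB": lambda x: 2,    # fixed
    "BD": lambda x: x,    # congestible
    "AB": lambda x: 0,    # zero-cost shortcut
}
ROUTES = [("OA", "AD"), ("OB", "BD"), ("OA", "AB", "BD")]

def travel_times(profile):
    load = {l: 0 for l in COST}
    for r in profile:
        for l in ROUTES[r]:
            load[l] += 1
    return [sum(COST[l](load[l]) for l in ROUTES[r]) for r in profile]

def is_nash(profile):
    times = travel_times(profile)
    for i in range(len(profile)):
        for r in range(len(ROUTES)):
            dev = list(profile)
            dev[i] = r
            if travel_times(dev)[i] < times[i]:
                return False
    return True

def profitable_coalition(profile):
    """A joint deviation making every mover strictly better off, or None."""
    times = travel_times(profile)
    n = len(profile)
    for dev in product(range(len(ROUTES)), repeat=n):
        movers = [i for i in range(n) if dev[i] != profile[i]]
        if movers and all(travel_times(list(dev))[i] < times[i]
                          for i in movers):
            return list(dev)
    return None
```

With both vehicles on the shortcut route, no unilateral deviation helps, yet the two-vehicle coalition splitting onto the outer routes lowers both travel times from 4 to 3.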


Autonomous vehicles need social awareness to find optima in multi-agent reinforcement learning routing games

Psarou, Anastasia, Gorczyca, Łukasz, Gaweł, Dominik, Kucharski, Rafał

arXiv.org Artificial Intelligence

Previous work has shown that when multiple selfish Autonomous Vehicles (AVs) are introduced to future cities and start learning optimal routing strategies using Multi-Agent Reinforcement Learning (MARL), they may destabilize traffic systems, as they require a significant amount of time to converge to the optimal solution, equivalent to years of real-world commuting. We demonstrate that moving beyond the selfish component in the reward significantly mitigates this issue. If each AV, apart from minimizing its own travel time, aims to reduce its impact on the system, this will be beneficial not only for system-wide performance but also for each individual player in this routing game. Marginal cost quantifies the impact of each individual action (route choice) on the system (total travel time). By introducing an intrinsic reward signal based on the marginal cost matrix, we significantly reduce training time and achieve convergence more reliably; including this term in the reward can reduce the degree of non-stationarity by aligning agents' objectives. Notably, the proposed counterfactual formulation preserves the system's equilibria and avoids oscillations. Our experiments show that training MARL algorithms with our novel reward formulation enables the agents to converge to the optimal solution, whereas the baseline algorithms fail to do so. We show these effects in both a toy network and the real-world network of Saint-Arnoult. Our results optimistically indicate that social awareness (i.e., including marginal costs in routing decisions) improves both the system-wide and individual performance of future urban systems with AVs.
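A marginal-cost (counterfactual) reward term of the kind described above can be sketched in a few lines. The linear link cost function, the function names, and the numbers are illustrative assumptions, not the paper's implementation: the counterfactual asks how much larger total system travel time is with the agent on its chosen route than without it.

```python
def link_time(a, b, load):
    # simple linear congestion function t(x) = a + b * x (assumed form)
    return a + b * load

def system_time(loads, links):
    # total vehicle-time: each vehicle on a link experiences its travel time
    return sum(load * link_time(*links[l], load) for l, load in loads.items())

def marginal_cost(agent_route, loads, links):
    """System travel time with the agent minus without it (counterfactual)."""
    with_agent = system_time(loads, links)
    loads_wo = dict(loads)
    for l in agent_route:
        loads_wo[l] -= 1
    return with_agent - system_time(loads_wo, links)

def social_reward(own_time, agent_route, loads, links, weight=1.0):
    # selfish term plus an intrinsic term: the externality imposed on others
    externality = marginal_cost(agent_route, loads, links) - own_time
    return -own_time - weight * externality
```

Setting `weight=0` recovers the purely selfish reward; a positive weight aligns each agent's objective with the system's, which is the mechanism the abstract credits for reduced non-stationarity.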


Reinforcement Learning-based Sequential Route Recommendation for System-Optimal Traffic Assignment

Wang, Leizhen, Duan, Peibo, Lyu, Cheng, Ma, Zhenliang

arXiv.org Artificial Intelligence

Modern navigation systems and shared mobility platforms increasingly rely on personalized route recommendations to improve individual travel experience and operational efficiency. However, a key question remains: can such sequential, personalized routing decisions collectively lead to system-optimal (SO) traffic assignment? This paper addresses this question by proposing a learning-based framework that reformulates the static SO traffic assignment problem as a single-agent deep reinforcement learning (RL) task. A central agent sequentially recommends routes to travelers as origin-destination (OD) demands arrive, to minimize total system travel time. To enhance learning efficiency and solution quality, we develop an MSA-guided deep Q-learning algorithm that integrates the iterative structure of traditional traffic assignment methods into the RL training process. The proposed approach is evaluated on both the Braess and Ortuzar-Willumsen (OW) networks. Results show that the RL agent converges to the theoretical SO solution in the Braess network and achieves only a 0.35% deviation in the OW network. Further ablation studies demonstrate that the route action set's design significantly impacts convergence speed and final performance, with SO-informed route sets leading to faster learning and better outcomes. This work provides a theoretically grounded and practically relevant approach to bridging individual routing behavior with system-level efficiency through learning-based sequential assignment.
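The MSA iteration that the abstract describes integrating into Q-learning can itself be sketched in a few lines. The two-link instance and linear cost functions below are illustrative assumptions: each iteration blends the current flows with an all-or-nothing assignment to the currently cheapest route, using the classic shrinking step size 1/k.

```python
def msa(demand, cost_fns, iters=200):
    """Method of successive averages on one OD pair with parallel links."""
    n = len(cost_fns)
    flows = [demand / n] * n                       # start from a uniform split
    for k in range(1, iters + 1):
        costs = [c(f) for c, f in zip(cost_fns, flows)]
        target = [0.0] * n
        target[costs.index(min(costs))] = demand   # all-or-nothing assignment
        step = 1.0 / k                             # shrinking MSA step size
        flows = [f + step * (t - f) for f, t in zip(flows, target)]
    return flows
```

For costs 1 + f and 2 + f with demand 10, equalizing travel times gives the equilibrium split (5.5, 4.5), which the iteration approaches with oscillations of shrinking amplitude.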


RouteRL: Multi-agent reinforcement learning framework for urban route choice with autonomous vehicles

Akman, Ahmet Onur, Psarou, Anastasia, Gorczyca, Łukasz, Varga, Zoltán György, Jamróz, Grzegorz, Kucharski, Rafał

arXiv.org Artificial Intelligence

RouteRL is a novel framework that integrates multi-agent reinforcement learning (MARL) with a microscopic traffic simulation, facilitating the testing and development of efficient route choice strategies for autonomous vehicles (AVs). The proposed framework simulates the daily route choices of driver agents in a city, including two types: human drivers, emulated using behavioral route choice models, and AVs, modeled as MARL agents optimizing their policies for a predefined objective. RouteRL aims to advance research in MARL, transport modeling, and human-AI interaction for transportation applications. This study presents a technical report on RouteRL, outlines its potential research contributions, and showcases its impact via illustrative examples.


AI-Driven Day-to-Day Route Choice

Wang, Leizhen, Duan, Peibo, He, Zhengbing, Lyu, Cheng, Chen, Xin, Zheng, Nan, Yao, Li, Ma, Zhenliang

arXiv.org Artificial Intelligence

Understanding individual travel behaviors is critical for developing efficient and sustainable transportation systems. Travel behavioral analysis aims to capture the decision-making process of individual travel execution, including travel route choice, travel mode choice, departure time choice, and trip purpose. Among these choices, modeling route choice not only helps analyze and understand travelers' behaviors, but also constitutes an essential part of traffic assignment methods [1]. Specifically, it enables the evaluation of travelers' perceptions of route characteristics, the forecasting of behavior in hypothetical scenarios, the prediction of future traffic dynamics on transportation networks, and the understanding of travelers' responses to travel information. Real-world route choice is complex because of the inherent difficulties in accurately representing human behavior, travelers' limited knowledge of network composition, uncertainties in perceptions of route characteristics, and the lack of precise information about travelers' preferences [1]. To overcome these limitations, day-to-day (DTD) traffic dynamics have attracted significant attention, since they focus on drivers' dynamic shifts in route choices and the evolution of traffic flow over time, rather than merely static equilibrium states. DTD models can flexibly incorporate diverse behavioral rules such as forecasting [2, 3], bounded rationality [4, 5], decision-making based on prospects [6, 7], marginal utility effects [8, 9], and social interactions [10]. Despite these advantages, identified in [11] and [12], DTD models still struggle to accurately reflect the observed fluctuations in traffic dynamics, particularly the persistent deviations around User Equilibrium (UE) noted in empirical studies [13, 14, 15]. To better understand traffic dynamics, Agent-Based Modeling (ABM) offers a promising alternative.
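A minimal logit DTD adjustment process of the kind surveyed above might look as follows; the exponential-smoothing learning rule, the parameter values, and the two-route instance are illustrative assumptions, not any specific cited model. Each day, travellers update a perceived cost from experienced travel times and re-split demand with a logit choice model.

```python
import math

def dtd(demand, cost_fns, days=100, smooth=0.3, theta=1.0):
    """Day-to-day flows on parallel routes under logit choice."""
    n = len(cost_fns)
    perceived = [0.0] * n
    flows = [demand / n] * n
    history = [list(flows)]
    for _ in range(days):
        experienced = [c(f) for c, f in zip(cost_fns, flows)]
        # learning: exponential smoothing of experienced costs
        perceived = [(1 - smooth) * p + smooth * e
                     for p, e in zip(perceived, experienced)]
        # choice: logit split of the fixed demand
        denom = sum(math.exp(-theta * p) for p in perceived)
        flows = [demand * math.exp(-theta * p) / denom for p in perceived]
        history.append(list(flows))
    return history
```

Even this toy process shows the DTD signature the paragraph describes: flows move day by day and settle near, but not exactly at, the deterministic UE split.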


Social impact of CAVs -- coexistence of machines and humans in the context of route choice

Jamróz, Grzegorz, Akman, Ahmet Onur, Psarou, Anastasia, Varga, Zoltán György, Kucharski, Rafał

arXiv.org Artificial Intelligence

Suppose that in a stable urban traffic system populated only by human-driven vehicles (HDVs), a given proportion (e.g. 10%) is replaced by a fleet of Connected and Autonomous Vehicles (CAVs), which share information and pursue a collective goal. Suppose these vehicles are centrally coordinated and differ from HDVs only in their collective capacities, allowing them to make more efficient routing decisions before travel on a given day begins. Suppose there is a choice between two routes and every day each driver decides which route to take. Human drivers maximize their own utility; CAVs might optimize different goals, such as the total travel time of the fleet. We show that in this plausible futuristic setting, the strategy CAVs are allowed to adopt may result in human drivers either benefiting or being systematically disadvantaged, and in urban networks becoming more or less optimal. Consequently, some regulatory measures might become indispensable.


Urban Bike Lane Planning with Bike Trajectories: Models, Algorithms, and a Real-World Case Study

Liu, Sheng, Shen, Zuo-Jun Max, Ji, Xiang

arXiv.org Artificial Intelligence

We study an urban bike lane planning problem based on fine-grained bike trajectory data, which is made available by smart city infrastructure such as bike-sharing systems. The key decision is where to build bike lanes in the existing road network. As bike-sharing systems become widespread in metropolitan areas around the world, bike lanes are being planned and constructed by many municipal governments to promote cycling and protect cyclists. Traditional bike lane planning approaches often rely on surveys and heuristics. We develop a general and novel optimization framework to guide bike lane planning from bike trajectories. We formalize the bike lane planning problem in view of the cyclists' utility functions and derive an integer optimization model to maximize the utility. To capture cyclists' route choices, we develop a bilevel program based on the Multinomial Logit model. We derive structural properties of the base model and prove that the Lagrangian dual of the bike lane planning model is polynomial-time solvable. Furthermore, we reformulate the route-choice-based planning model as a mixed integer linear program using a linear approximation scheme. We develop tractable formulations and efficient algorithms to solve the large-scale optimization problem. Via a real-world case study with a city government, we demonstrate the efficiency of the proposed algorithms and quantify the trade-off between the coverage of bike trips and the continuity of bike lanes. We show how the network topology evolves according to the utility functions and highlight the importance of understanding cyclists' route choices. The proposed framework supports data-driven urban planning in smart city operations management.
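The lower level of such a bilevel model, MNL route choice responding to lane decisions, can be sketched as follows. The utility coefficients, the candidate routes, and the usage metric are illustrative assumptions, not the paper's formulation: utility penalizes length and rewards edges that receive a bike lane, and the upper level would search over lane sets.

```python
import math

def route_utility(route, lengths, lanes, beta_len=-1.0, beta_lane=0.5):
    # assumed linear-in-attributes utility: length penalty plus lane bonus
    return sum(beta_len * lengths[e] + (beta_lane if e in lanes else 0.0)
               for e in route)

def mnl_probs(routes, lengths, lanes):
    """Multinomial Logit choice probabilities over candidate routes."""
    utils = [route_utility(r, lengths, lanes) for r in routes]
    m = max(utils)                        # stabilize the exponentials
    exps = [math.exp(u - m) for u in utils]
    s = sum(exps)
    return [e / s for e in exps]

def expected_lane_usage(routes, lengths, lanes, demand):
    """An upper-level quantity: expected cyclist-distance on bike lanes."""
    probs = mnl_probs(routes, lengths, lanes)
    return sum(demand * p * sum(lengths[e] for e in r if e in lanes)
               for p, r in zip(probs, routes))
```

The bilevel structure appears when the planner picks `lanes` (subject to a budget) to maximize a quantity like `expected_lane_usage`, while the probabilities themselves shift in response to the chosen lanes.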


Experiments on route choice set generation using a large GPS trajectory set

Yao, Rui, Bekhor, Shlomo

arXiv.org Machine Learning

Several route choice models developed in the literature were based on a relatively small number of observations. With the extensive use of tracking devices in recent surveys, there is a possibility to obtain new insights into travelers' choice behavior. In this paper, different path generation algorithms are evaluated using a large GPS trajectory dataset. The dataset contains 6,000 observations from the Tel-Aviv metropolitan area. An initial analysis is performed by generating a single route based on the shortest path. Almost 60% of the 6,000 observations can be covered (assuming a threshold of 80% overlap) using a single path. This result significantly contrasts with previous literature findings. Link penalty, link elimination, simulation, and via-node methods are applied to generate route sets, and the consistency of the algorithms is compared. A modified link penalty method, which accounts for a preference for using higher-hierarchy roads, provides a route set with 97% coverage (80% overlap threshold). The via-node method produces a route set with satisfactory coverage and generates routes that are more heterogeneous (in terms of the number of links and the ratio of routes).
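The link penalty idea can be sketched in a few lines: find a shortest path, inflate the costs of its links, and repeat. The toy graph and the 1.4 penalty factor below are illustrative assumptions; the modified method in the paper additionally favours higher-hierarchy roads, which is not modelled here.

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path from src to dst as a list of (u, v) links."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append((prev[node], node))
        node = prev[node]
    return list(reversed(path))

def link_penalty_routes(graph, src, dst, n_routes=3, factor=1.4):
    """Generate a route set by repeatedly penalizing used links."""
    graph = {u: dict(nbrs) for u, nbrs in graph.items()}  # work on a copy
    routes = []
    for _ in range(n_routes):
        path = dijkstra(graph, src, dst)
        if path not in routes:                 # keep only distinct routes
            routes.append(path)
        for u, v in path:                      # inflate costs of used links
            graph[u][v] *= factor
    return routes
```

On a two-route toy network, the method first returns the shortest route, re-selects it while its penalized cost is still competitive, and eventually switches to the alternative, yielding a heterogeneous route set.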